-
Personalising decision-making assistance to different users and tasks can improve human-AI team performance, for example by appropriately calibrating reliance on AI assistance. However, people differ in many ways, with many hidden qualities, and adapting AI assistance to these hidden qualities is difficult. In this work, we consider a hidden quality previously identified as important: overreliance on AI assistance. We would like to (i) quickly determine the value of this hidden quality, and (ii) personalise AI assistance based on this value. In our first study, we introduce a few probe questions (where we know the true answer) to determine whether a user is an overrelier, finding that correctly chosen probe questions work well. In our second study, we improve human-AI team performance by personalising AI assistance based on users' overreliance quality. Exploratory analysis indicates that people learn different strategies for using AI assistance depending on what AI assistance they saw previously, suggesting that adaptive AI assistance may need to take this into account. We hope that future work will continue exploring how to infer and personalise to other important hidden qualities.

Free, publicly-accessible full text available December 2, 2026
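The two-step procedure described in this abstract (probe for the hidden quality, then personalise the assistance) can be pictured with a minimal sketch. Everything below (the ProbeResult structure, the 0.5 threshold, and the two assistance-type names) is an illustrative assumption, not taken from the study:

```python
# Sketch: classify a user as an overrelier from a few probe questions whose
# true answers are known, then pick an assistance style accordingly.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    ai_was_wrong: bool       # the AI's recommendation on this probe was incorrect
    user_followed_ai: bool   # the user went with the AI's answer anyway

def is_overrelier(probes: list[ProbeResult], threshold: float = 0.5) -> bool:
    """Flag a user who follows incorrect AI recommendations too often."""
    wrong_ai = [p for p in probes if p.ai_was_wrong]
    if not wrong_ai:
        return False  # no informative probes observed yet
    follow_rate = sum(p.user_followed_ai for p in wrong_ai) / len(wrong_ai)
    return follow_rate >= threshold

def choose_assistance(overrelier: bool) -> str:
    # Hypothetical policy: give overreliers assistance that invites
    # verification rather than a bare recommendation to accept.
    return "explanation_only" if overrelier else "recommendation_with_explanation"

# Example: the user followed the AI on two of three probes where it was wrong.
probes = [ProbeResult(True, True), ProbeResult(True, True), ProbeResult(True, False)]
print(choose_assistance(is_overrelier(probes)))  # -> explanation_only
```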
-
AI-enabled decision-support systems aim to help medical providers rapidly make decisions with limited information during medical emergencies. A critical challenge in developing these systems is supporting providers in interpreting the system output to make optimal treatment decisions. In this study, we designed and evaluated an AI-enabled decision-support system to aid providers in treating patients with traumatic injuries. We first conducted user research with physicians to identify and design information types and AI outputs for a decision-support display. We then conducted an online experiment with 35 medical providers from six health systems to evaluate two human-AI interaction strategies: (1) AI information synthesis and (2) AI information and recommendations. We found that providers were more likely to make correct decisions when AI information and recommendations were provided compared to receiving no AI support. We also identified two socio-technical barriers to providing AI recommendations during time-critical medical events: (1) an accuracy-time trade-off in providing recommendations and (2) polarizing perceptions of recommendations among providers. We discuss three implications for developing AI-enabled decision support used in time-critical events, contributing to the limited research on human-AI interaction in this context.

Free, publicly-accessible full text available October 18, 2026
-
Policymakers have established that the ability to contest decisions made by or with algorithms is core to responsible artificial intelligence (AI). However, there has been a disconnect between research on contestability of algorithms, and what the situated practice of contestation looks like in contexts across the world, especially amongst communities on the margins. We address this gap through a qualitative study of follow-up and contestation in accessing public services for land ownership in rural India and affordable housing in the urban United States. We find there are significant barriers to exercising rights and contesting decisions, which intermediaries like NGO workers or lawyers work with communities to address. We draw on the notion of accompaniment in global health to highlight the open-ended work required to support people in navigating violent social systems. We discuss the implications of our findings for key aspects of contestability, including building capacity for contestation, human review, and the role of explanations. We also discuss how sociotechnical systems of algorithmic decision-making can embody accompaniment by taking on a higher burden of preventing denials and enabling contestation.
-
LGBTQ+ individuals are increasingly turning to chatbots powered by large language models (LLMs) to meet their mental health needs. However, little research has explored whether these chatbots can adequately and safely provide tailored support for this demographic. We interviewed 18 LGBTQ+ and 13 non-LGBTQ+ participants about their experiences with LLM-based chatbots for mental health needs. LGBTQ+ participants relied on these chatbots for mental health support, likely due to an absence of support in real life. Notably, while LLMs offer prompt support, they frequently fall short in grasping the nuances of LGBTQ-specific challenges. Although fine-tuning LLMs to address LGBTQ+ needs can be a step in the right direction, it is not a panacea. The deeper issue is entrenched in societal discrimination. Consequently, we call on future researchers and designers to look beyond mere technical refinements and advocate for holistic strategies that confront and counteract the societal biases burdening the LGBTQ+ community.
-
In settings where users both need high accuracy and are time-pressured, such as doctors working in emergency rooms, we want to provide AI assistance that both increases decision accuracy and reduces decision-making time. Current literature focusses on how users interact with AI assistance when there is no time pressure, finding that different AI assistances have different benefits: some can reduce time taken while increasing overreliance on AI, while others do the opposite. The precise benefit can depend on both the user and the task. In time-pressured scenarios, adapting when we show AI assistance is especially important: relying on the AI assistance can save time, and can therefore be beneficial when the AI is likely to be right. We would ideally adapt what AI assistance we show depending on various properties (of the task and of the user) in order to best trade off accuracy and time. We introduce a study where users have to answer a series of logic puzzles. We find that time pressure affects how users use different AI assistances, making some assistances more beneficial than others when compared to no-time-pressure settings. We also find that a user's overreliance rate is a key predictor of their behaviour: overreliers and non-overreliers use different AI assistance types differently. We find marginal correlations between a user's overreliance rate (which is related to the user's trust in AI recommendations) and their Big Five personality traits. Overall, our work suggests that AI assistances have different accuracy-time trade-offs when people are under time pressure compared to no time pressure, and we explore how we might adapt AI assistances in this setting.
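One way to picture the accuracy-time trade-off this abstract describes is a simple decision rule for when to show an AI recommendation. This is a hedged sketch under made-up assumptions: the probabilities, the linear utility, and the 0.5 dampening factor for non-overreliers are all illustrative, not the paper's model:

```python
# Sketch: show the AI's recommendation only when its expected accuracy
# benefit plus the value of the time saved outweighs the risk of
# overreliance on a wrong answer.
def show_recommendation(p_ai_correct: float,
                        p_user_correct_alone: float,
                        seconds_saved: float,
                        overrelier: bool,
                        time_value: float = 0.01) -> bool:
    if overrelier:
        # Assume an overrelier's final answer simply tracks the AI whenever
        # a recommendation is shown.
        accuracy_gain = p_ai_correct - p_user_correct_alone
    else:
        # Assume a non-overrelier keeps some of their own judgement, so both
        # the upside and downside are dampened (the 0.5 factor is made up).
        accuracy_gain = 0.5 * (p_ai_correct - p_user_correct_alone)
    return accuracy_gain + time_value * seconds_saved > 0

# Example: AI at 90% vs. an unaided user at 70%, saving ~15 seconds.
print(show_recommendation(0.9, 0.7, 15.0, overrelier=True))  # -> True
```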
-
As dating websites are becoming an essential part of how people meet intimate and romantic partners, it is vital to design these systems to be resistant to, or at least not amplify, bias and discrimination. Instead, the results of our online experiment with a simulated dating website demonstrate that popular dating website design choices, such as the use of the swipe interface (swiping in one direction to indicate a like and in the other to express a dislike) and match scores, resulted in racially biased choices even when people explicitly claimed not to have considered race in their decision-making. This bias was significantly reduced when the order of information presentation was reversed, such that people first saw substantive profile information related to their explicitly stated preferences before seeing the profile name and photo. These results indicate that currently popular design choices amplify people's implicit biases in their choices of potential romantic partners, but the effects of these implicit biases can be reduced by carefully redesigning dating website interfaces.
